
    Using Description Logics for Recognising Textual Entailment

    The aim of this paper is to show how the Recognising Textual Entailment (RTE) task can be handled using Description Logics (DLs). To do this, we propose a representation of natural language semantics in DLs inspired by existing representations in first-order logic. Our most significant contribution, however, is the definition of two novel inference tasks, A-Box saturation and subgraph detection, which are crucial to our approach to RTE.

    Implication textuelle et logiques de description (Textual Entailment and Description Logics)

    Many natural language processing (NLP) applications involve semantic processing of natural language: these applications need to "understand" the meaning of the text they process (even if they often do so only very superficially). Unfortunately, the methods used in these applications are generally ad hoc and do not allow a generic modelling of the comprehension phenomenon. The aim of this dissertation is to explore to what extent Description Logics (DLs) can be used to represent the meaning of a text and then to reason over that meaning. More specifically, it explores to what extent DLs can be used to model the recognition of textual entailment, that is, to decide, given two text fragments T1 and T2, whether or not T1 entails T2.
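The RTE decision just described, given fragments T1 and T2, decide whether T1 entails T2, can be illustrated with a deliberately naive word-overlap baseline. This sketch only shows the task's input/output shape; the function name and threshold are hypothetical, and the DL-based approach explored here is far more sophisticated:

```python
# Illustrative only: a naive word-overlap baseline for the RTE decision
# "does T1 entail T2?". It merely demonstrates the shape of the task,
# not the dissertation's Description Logic approach.
def entails(t1: str, t2: str, threshold: float = 0.8) -> bool:
    words1 = set(t1.lower().split())
    words2 = set(t2.lower().split())
    # Fraction of T2's words covered by T1 (hypothetical heuristic)
    overlap = len(words1 & words2) / len(words2)
    return overlap >= threshold
```

Such surface baselines fail on exactly the phenomena the dissertation targets (paraphrase, negation, quantification), which is the motivation for reasoning over a semantic representation instead.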

    Syntactic testsuites and Textual Entailment Recognition

    We focus on textual entailments mediated by syntax and propose a new methodology for evaluating textual entailment recognition systems on such data. The main idea is to generate a syntactically annotated corpus of (non-)entailment pairs and to use error mining to identify the most likely sources of errors. To illustrate the approach, we apply this methodology to the Afazio RTE system and show how it permits identifying the most likely sources of errors made by this system on a testsuite of 10,000 (non-)entailment pairs.
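The error-mining step can be sketched as follows, a minimal, hypothetical scoring of surface forms by how strongly they concentrate in failed pairs; the actual mining used with the Afazio system is more elaborate, and all names here are illustrative:

```python
from collections import Counter

# Minimal sketch of error mining: rank surface forms by the share of
# their occurrences that fall in items the system got wrong.
def mine_errors(failed: list[str], passed: list[str]) -> dict[str, float]:
    fail_counts = Counter(w for s in failed for w in s.split())
    all_counts = Counter(w for s in failed + passed for w in s.split())
    # Suspicion score in [0, 1]: 1.0 means the form only occurs in failures
    return {w: fail_counts[w] / all_counts[w] for w in fail_counts}
```

Sorting forms by this score surfaces the constructions a system most consistently fails on, which is what makes a large annotated (non-)entailment corpus useful for diagnosis rather than just scoring.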

    Deep Semantics for Dependency Structures

    The original publication is available at www.springerlink.com. Although dependency parsers have become increasingly popular, little work has been done on how to associate dependency structures with deep semantic representations. In this paper, we propose a semantic calculus for dependency structures which can be used to construct deep semantic representations from joint syntactic and semantic dependency structures similar to those used in the CoNLL 2008 Shared Task.

    Réécriture et détection d'implication textuelle (Rewriting and Textual Entailment Detection)

    We present a system for normalising syntactic variation which improves the recognition of the textual entailment relation between two sentences. The system is evaluated on a testsuite of 2,520 test pairs, and the results show a gain in precision over a baseline system ranging from 29.8% to 78.5% depending on the complexity of the cases considered.

    Benchmarking for syntax-based sentential inference

    We propose a methodology for investigating how well NLP systems handle meaning-preserving syntactic variations. We start by presenting a method for the semi-automated creation of a benchmark where entailment is mediated solely by meaning-preserving syntactic variations. We then use this benchmark to compare a semantic role labeller and two grammar-based RTE systems. We argue that the proposed methodology (i) supports a modular evaluation of the ability of NLP systems to handle the syntax/semantics interface and (ii) permits focused error mining and error analysis.

    Semantic Normalisation: a Framework and an Experiment

    We present a normalisation framework for linguistic representations and illustrate its use by normalising the Stanford Dependency graphs (SDs) produced by the Stanford parser into Labelled Stanford Dependency graphs (LSDs). The normalised representations are evaluated both on a testsuite of constructed examples and on free text. The resulting representations improve on the standard predicate/argument structures produced by SRL by combining role labelling with the semantically oriented features of SDs. Furthermore, the proposed normalisation framework opens the way to stronger normalisation processes which should be useful in reducing the burden on inference.

    Noun/Verb Inference

    We present a system which combines logical inference with a semantic calculus producing normalised semantic representations that are robust to surface differences irrelevant for entailment detection. We focus on the detection of entailment relations between sentence pairs involving noun/verb alternations, and we show that the system correctly predicts a range of interactions between basic noun/verb predications and semantic phenomena such as quantification, negation and non-factive contexts.

    Mapping the Lexique des Verbes du Français (Lexicon of French Verbs) to a NLP Lexicon using Examples

    This article presents experiments aimed at mapping the Lexique des Verbes du Français (Lexicon of French Verbs) to FRILEX, a Natural Language Processing (NLP) lexicon based on DICOVALENCE. The two resources (the Lexicon of French Verbs and DICOVALENCE) were built by linguists on the basis of very different theories, which makes a direct mapping nearly impossible. We therefore chose to use the examples provided in one of the resources to find implicit links between the two and make them explicit.
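The example-based linking idea can be sketched as follows: a shared example sentence is taken as evidence that two lexicon entries describe the same verb usage. The entry identifiers, data layout, and function name below are hypothetical, not the actual FRILEX or DICOVALENCE formats:

```python
# Sketch of example-based lexicon linking: two entries from different
# resources are linked when they cite at least one example in common.
def link_by_examples(lexicon_a: dict[str, set[str]],
                     lexicon_b: dict[str, set[str]]) -> list[tuple[str, str]]:
    links = []
    for entry_a, examples_a in lexicon_a.items():
        for entry_b, examples_b in lexicon_b.items():
            if examples_a & examples_b:  # at least one shared example sentence
                links.append((entry_a, entry_b))
    return links
```

The attraction of this design is that it sidesteps the theoretical mismatch between the two resources: examples are theory-neutral observations, so agreement on examples can stand in for agreement on analyses.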

    Improving Simulations of MPI Applications Using A Hybrid Network Model with Topology and Contention Support

    Proper modeling of collective communications is essential for understanding the behavior of medium-to-large scale parallel applications, and even minor deviations in implementation can adversely affect the prediction of real-world performance. We propose a hybrid network model extending LogP-based approaches to account for topology and contention in high-speed TCP networks. This model is validated within SMPI, an MPI implementation provided by the SimGrid simulation toolkit. With SMPI, standard MPI applications can be compiled and run in a simulated network environment, and traces can be captured without incurring errors from tracing overheads or poor clock synchronization as in physical experiments. SMPI provides features for simulating applications that require large amounts of time or resources, including selective execution, RAM folding, and off-line replay of execution traces. We validate our model by comparing traces produced by SMPI with those from other simulation platforms, as well as real-world environments.
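As background, the classic LogP cost model that the hybrid model extends charges each message a send overhead o, a wire latency L, and a receive overhead o, with consecutive sends separated by at least the gap g. A minimal sketch under the usual assumption g ≥ o (parameter values are hypothetical, and the paper's model adds topology and contention terms on top of this):

```python
# Sketch of the classic LogP point-to-point cost that the hybrid
# model extends. L = latency, o = per-message CPU overhead,
# g = minimum gap between consecutive sends (assumed g >= o).
def logp_send_time(k: int, L: float, o: float, g: float) -> float:
    # k pipelined messages: the first send costs o, each later send
    # waits g, and the last message still needs L on the wire plus o
    # at the receiver.
    return 2 * o + (k - 1) * g + L
```

A single small message thus costs L + 2o, and long message trains are gap-dominated, which is why accounting for contention (effectively inflating g when flows share a link) matters for predicting collective-communication performance.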